• DOMAIN: Electronics and Telecommunication
• CONTEXT: A communications equipment manufacturer has a product that emits informative signals. The company wants to build a machine learning model that predicts the equipment's signal quality from various measured parameters.
• DATA DESCRIPTION: The data set contains the results of various signal tests performed on the equipment.
• PROJECT OBJECTIVE: Build a classifier that uses the given parameters to determine the signal strength or quality.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
%matplotlib inline
tf.random.set_seed(42)
import warnings
warnings.filterwarnings('ignore')
#read signal.csv datafile
signal_data = pd.read_csv("NN Project Data - Signal.csv")
signal_data.head()
|   | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 | Parameter 5 | Parameter 6 | Parameter 7 | Parameter 8 | Parameter 9 | Parameter 10 | Parameter 11 | Signal_Strength |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 7.4 | 0.70 | 0.00 | 1.9 | 0.076 | 11.0 | 34.0 | 0.9978 | 3.51 | 0.56 | 9.4 | 5 |
| 1 | 7.8 | 0.88 | 0.00 | 2.6 | 0.098 | 25.0 | 67.0 | 0.9968 | 3.20 | 0.68 | 9.8 | 5 |
| 2 | 7.8 | 0.76 | 0.04 | 2.3 | 0.092 | 15.0 | 54.0 | 0.9970 | 3.26 | 0.65 | 9.8 | 5 |
| 3 | 11.2 | 0.28 | 0.56 | 1.9 | 0.075 | 17.0 | 60.0 | 0.9980 | 3.16 | 0.58 | 9.8 | 6 |
| 4 | 7.4 | 0.70 | 0.00 | 1.9 | 0.076 | 11.0 | 34.0 | 0.9978 | 3.51 | 0.56 | 9.4 | 5 |
print("The number of columns are : " , signal_data.shape[1])
print("The number of rows are : " , signal_data.shape[0])
The number of columns are :  12
The number of rows are :  1599
signal_data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1599 entries, 0 to 1598
Data columns (total 12 columns):
 #   Column           Non-Null Count  Dtype
---  ------           --------------  -----
 0   Parameter 1      1599 non-null   float64
 1   Parameter 2      1599 non-null   float64
 2   Parameter 3      1599 non-null   float64
 3   Parameter 4      1599 non-null   float64
 4   Parameter 5      1599 non-null   float64
 5   Parameter 6      1599 non-null   float64
 6   Parameter 7      1599 non-null   float64
 7   Parameter 8      1599 non-null   float64
 8   Parameter 9      1599 non-null   float64
 9   Parameter 10     1599 non-null   float64
 10  Parameter 11     1599 non-null   float64
 11  Signal_Strength  1599 non-null   int64
dtypes: float64(11), int64(1)
memory usage: 150.0 KB
print("The percentage of missing values in each column are : \n " , signal_data.isnull().sum()/len(signal_data))
The percentage of missing values in each column are :
Parameter 1        0.0
Parameter 2        0.0
Parameter 3        0.0
Parameter 4        0.0
Parameter 5        0.0
Parameter 6        0.0
Parameter 7        0.0
Parameter 8        0.0
Parameter 9        0.0
Parameter 10       0.0
Parameter 11       0.0
Signal_Strength    0.0
dtype: float64
signal_data.duplicated().sum()/len(signal_data)
0.150093808630394
signal_data[signal_data.duplicated()]
|   | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 | Parameter 5 | Parameter 6 | Parameter 7 | Parameter 8 | Parameter 9 | Parameter 10 | Parameter 11 | Signal_Strength |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 4 | 7.4 | 0.700 | 0.00 | 1.90 | 0.076 | 11.0 | 34.0 | 0.99780 | 3.51 | 0.56 | 9.4 | 5 |
| 11 | 7.5 | 0.500 | 0.36 | 6.10 | 0.071 | 17.0 | 102.0 | 0.99780 | 3.35 | 0.80 | 10.5 | 5 |
| 27 | 7.9 | 0.430 | 0.21 | 1.60 | 0.106 | 10.0 | 37.0 | 0.99660 | 3.17 | 0.91 | 9.5 | 5 |
| 40 | 7.3 | 0.450 | 0.36 | 5.90 | 0.074 | 12.0 | 87.0 | 0.99780 | 3.33 | 0.83 | 10.5 | 5 |
| 65 | 7.2 | 0.725 | 0.05 | 4.65 | 0.086 | 4.0 | 11.0 | 0.99620 | 3.41 | 0.39 | 10.9 | 5 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1563 | 7.2 | 0.695 | 0.13 | 2.00 | 0.076 | 12.0 | 20.0 | 0.99546 | 3.29 | 0.54 | 10.1 | 5 |
| 1564 | 7.2 | 0.695 | 0.13 | 2.00 | 0.076 | 12.0 | 20.0 | 0.99546 | 3.29 | 0.54 | 10.1 | 5 |
| 1567 | 7.2 | 0.695 | 0.13 | 2.00 | 0.076 | 12.0 | 20.0 | 0.99546 | 3.29 | 0.54 | 10.1 | 5 |
| 1581 | 6.2 | 0.560 | 0.09 | 1.70 | 0.053 | 24.0 | 32.0 | 0.99402 | 3.54 | 0.60 | 11.3 | 5 |
| 1596 | 6.3 | 0.510 | 0.13 | 2.30 | 0.076 | 29.0 | 40.0 | 0.99574 | 3.42 | 0.75 | 11.0 | 6 |
240 rows × 12 columns
# drop duplicate records, keeping the last occurrence of each
signal_data2 = signal_data.drop_duplicates(keep='last')
signal_data2.duplicated().sum()
0
signal_data2.shape
(1359, 12)
signal_data2.isna().sum()
Parameter 1        0
Parameter 2        0
Parameter 3        0
Parameter 4        0
Parameter 5        0
Parameter 6        0
Parameter 7        0
Parameter 8        0
Parameter 9        0
Parameter 10       0
Parameter 11       0
Signal_Strength    0
dtype: int64
signal_data2['Signal_Strength'].value_counts()
5    577
6    535
7    167
4     53
8     17
3     10
Name: Signal_Strength, dtype: int64
sns.histplot(signal_data2['Signal_Strength']);
signal_data2.columns
Index(['Parameter 1', 'Parameter 2', 'Parameter 3', 'Parameter 4',
'Parameter 5', 'Parameter 6', 'Parameter 7', 'Parameter 8',
'Parameter 9', 'Parameter 10', 'Parameter 11', 'Signal_Strength'],
dtype='object')
signal_data2.describe().T
| count | mean | std | min | 25% | 50% | 75% | max | |
|---|---|---|---|---|---|---|---|---|
| Parameter 1 | 1359.0 | 8.310596 | 1.736990 | 4.60000 | 7.1000 | 7.9000 | 9.20000 | 15.90000 |
| Parameter 2 | 1359.0 | 0.529478 | 0.183031 | 0.12000 | 0.3900 | 0.5200 | 0.64000 | 1.58000 |
| Parameter 3 | 1359.0 | 0.272333 | 0.195537 | 0.00000 | 0.0900 | 0.2600 | 0.43000 | 1.00000 |
| Parameter 4 | 1359.0 | 2.523400 | 1.352314 | 0.90000 | 1.9000 | 2.2000 | 2.60000 | 15.50000 |
| Parameter 5 | 1359.0 | 0.088124 | 0.049377 | 0.01200 | 0.0700 | 0.0790 | 0.09100 | 0.61100 |
| Parameter 6 | 1359.0 | 15.893304 | 10.447270 | 1.00000 | 7.0000 | 14.0000 | 21.00000 | 72.00000 |
| Parameter 7 | 1359.0 | 46.825975 | 33.408946 | 6.00000 | 22.0000 | 38.0000 | 63.00000 | 289.00000 |
| Parameter 8 | 1359.0 | 0.996709 | 0.001869 | 0.99007 | 0.9956 | 0.9967 | 0.99782 | 1.00369 |
| Parameter 9 | 1359.0 | 3.309787 | 0.155036 | 2.74000 | 3.2100 | 3.3100 | 3.40000 | 4.01000 |
| Parameter 10 | 1359.0 | 0.658705 | 0.170667 | 0.33000 | 0.5500 | 0.6200 | 0.73000 | 2.00000 |
| Parameter 11 | 1359.0 | 10.432315 | 1.082065 | 8.40000 | 9.5000 | 10.2000 | 11.10000 | 14.90000 |
| Signal_Strength | 1359.0 | 5.623252 | 0.823578 | 3.00000 | 5.0000 | 6.0000 | 6.00000 | 8.00000 |
# univariate analysis: distributions and outliers of each column
# (plot the de-duplicated frame, not the original signal_data)
count = 1
plt.figure(figsize=(15, 10))
for column in signal_data2.columns:
    plt.subplot(3, 4, count)
    sns.histplot(signal_data2[column])
    count = count + 1

count = 1
plt.figure(figsize=(15, 10))
for column in signal_data2.columns:
    plt.subplot(3, 4, count)
    sns.boxplot(x=signal_data2[column])
    count = count + 1
#bivariate analysis
sns.scatterplot(data = signal_data2,y = 'Parameter 1' , x = 'Parameter 2',hue='Signal_Strength',palette='deep');
sns.scatterplot(data = signal_data2,y = 'Parameter 1' ,x = 'Parameter 3' , hue = 'Signal_Strength',palette='deep');
# multivariate analysis
sns.pairplot(signal_data2,diag_kind='kde',hue = 'Signal_Strength',palette='deep');
corr = signal_data2.corr()
plt.figure(figsize=(15,15))
sns.heatmap(corr,annot=True);
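To read the heatmap programmatically, one can rank the absolute pairwise correlations. A small sketch (the helper name `top_correlations` is hypothetical; it assumes a numeric DataFrame such as `signal_data2`):

```python
import numpy as np
import pandas as pd

def top_correlations(df, n=5):
    """Return the n strongest absolute pairwise correlations in df."""
    corr = df.corr().abs()
    # keep only the strictly upper triangle so each pair appears once
    mask = ~np.tril(np.ones(corr.shape, dtype=bool))
    pairs = corr.where(mask).stack()
    return pairs.sort_values(ascending=False).head(n)
```

Called as `top_correlations(signal_data2)`, this would list the most correlated parameter pairs shown in the heatmap.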
# split the data into x and y
X = signal_data2.drop('Signal_Strength',axis = 1)
Y = signal_data2['Signal_Strength']
X.shape,Y.shape
((1359, 11), (1359,))
Y.value_counts().sort_index()
3     10
4     53
5    577
6    535
7    167
8     17
Name: Signal_Strength, dtype: int64
replace = {3:0,
4:1,
5:2,
6:3,
7:4,
8:5}
Y = Y.replace(replace)
Y.value_counts().sort_index()
0     10
1     53
2    577
3    535
4    167
5     17
Name: Signal_Strength, dtype: int64
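The same remapping can be done without a hand-written dictionary. A sketch using scikit-learn's `LabelEncoder`, which assigns 0..k-1 to the sorted unique labels (`y_raw` here is an illustrative stand-in for the `Signal_Strength` column):

```python
from sklearn.preprocessing import LabelEncoder
import pandas as pd

# illustrative stand-in for the Signal_Strength column (an assumption)
y_raw = pd.Series([3, 5, 8, 6, 5, 7, 4])

le = LabelEncoder()
y_enc = le.fit_transform(y_raw)  # sorted unique labels 3..8 -> 0..5
print(list(le.classes_))         # original labels, recoverable via inverse_transform
```

A practical advantage over the manual dictionary: `le.inverse_transform(...)` maps model predictions back to the original 3-8 scale.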
from imblearn.over_sampling import SMOTE
smt = SMOTE(random_state=42)
X_sm,Y_sm = smt.fit_resample(X,Y)
X_sm.shape,Y_sm.shape
((3462, 11), (3462,))
Y_sm.value_counts().sort_index()
0    577
1    577
2    577
3    577
4    577
5    577
Name: Signal_Strength, dtype: int64
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X_sm,Y_sm,test_size=0.3,random_state=42)
print(X_train.shape,X_test.shape)
(2423, 11) (1039, 11)
print(y_train.shape,y_test.shape)
(2423,) (1039,)
y_train.value_counts()
1    432
0    405
5    402
3    397
4    396
2    391
Name: Signal_Strength, dtype: int64
X_train.shape,y_train.shape
((2423, 11), (2423,))
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scale = scaler.fit_transform(X_train)
# transform (not fit_transform) the test set with the scaler fitted on the
# training data, otherwise test-set statistics leak into preprocessing
x_test_scale = scaler.transform(X_test)
x_train_scale,x_test_scale
(array([[ 1.34734093, -0.93851422, 1.0003491 , ..., -0.66607665,
0.63984582, 1.03775608],
[ 1.41867401, -0.95585508, 1.27086317, ..., -1.18428876,
-0.03467819, 0.26319364],
[-1.1240663 , -0.2714215 , -1.06350926, ..., 1.00245497,
0.79906373, 2.22748519],
...,
[-0.93461291, -0.49252281, -0.94339379, ..., -0.203941 ,
-0.04786016, -0.9467208 ],
[ 0.36055008, -1.10069291, 0.20291612, ..., -1.06219291,
-0.6730474 , -0.28522851],
[ 0.54951684, -0.78981587, 1.15981022, ..., -0.82445566,
0.1855857 , 1.36871358]]),
array([[ 0.90448989, -1.07795937, 0.85389434, ..., 0.21218696,
0.83632234, -0.24995183],
[ 0.17167969, 1.00784273, -1.28607521, ..., -0.78551678,
-0.52442304, -0.79596818],
[ 1.56493639, -0.31020776, 1.1840462 , ..., 0.03091441,
0.45007057, -1.0423288 ],
...,
[ 0.61078361, -0.72638682, 0.13462267, ..., -1.64456355,
-0.20818348, -0.20294266],
[ 0.49695336, -0.9664287 , 0.26707374, ..., 0.48709664,
1.00285924, -0.05540856],
[ 0.42259622, -1.13929862, 0.40192968, ..., -0.365845 ,
1.14221734, 0.81367252]]))
from tensorflow.keras.utils import to_categorical
# Convert to "one-hot" vectors using the to_categorical function
num_classes = 6
y_train_cat = to_categorical(y_train, num_classes)
y_test_cat = to_categorical(y_test,num_classes)
y_train_cat
array([[0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 1.],
...,
[0., 0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 1.]], dtype=float32)
y_train_cat.shape
(2423, 6)
from tensorflow.keras import losses
from tensorflow.keras import optimizers
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import InputLayer,BatchNormalization,Dropout
classifier = Sequential()
classifier.add(Dense(12,input_shape=(11,),activation = 'relu'))
classifier.add(Dense(8,activation='relu'))
classifier.add(Dense(6,activation='softmax'))
opt = tf.keras.optimizers.Adam()
classifier.compile(loss=losses.categorical_crossentropy,metrics=['accuracy'],optimizer = opt)
classifier.summary()
Model: "sequential_28"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_94 (Dense) (None, 12) 144
dense_95 (Dense) (None, 8) 104
dense_96 (Dense) (None, 6) 54
=================================================================
Total params: 302
Trainable params: 302
Non-trainable params: 0
_________________________________________________________________
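The parameter counts in the summary can be verified by hand: a `Dense` layer stores inputs × units weights plus one bias per unit.

```python
# (inputs, units) for the three Dense layers above: 11->12, 12->8, 8->6
layer_sizes = [(11, 12), (12, 8), (8, 6)]
params = [i * u + u for i, u in layer_sizes]  # weights + biases per layer
print(params, sum(params))  # [144, 104, 54] 302
```

This matches the 144 / 104 / 54 rows and the total of 302 trainable parameters reported by `classifier.summary()`.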
hist_clf = classifier.fit(x_train_scale,y_train_cat,epochs=40,validation_split=0.2,verbose=2)
Epoch 1/40
61/61 - 2s - loss: 1.7932 - accuracy: 0.2167 - val_loss: 1.7433 - val_accuracy: 0.2639 - 2s/epoch - 38ms/step
Epoch 2/40
61/61 - 0s - loss: 1.6543 - accuracy: 0.3369 - val_loss: 1.6005 - val_accuracy: 0.4206 - 256ms/epoch - 4ms/step
...
Epoch 39/40
61/61 - 0s - loss: 0.7688 - accuracy: 0.6930 - val_loss: 0.7947 - val_accuracy: 0.6454 - 281ms/epoch - 5ms/step
Epoch 40/40
61/61 - 0s - loss: 0.7647 - accuracy: 0.6956 - val_loss: 0.7897 - val_accuracy: 0.6536 - 275ms/epoch - 5ms/step
def box_acc(hist):
    a = pd.DataFrame({'acc': hist.history['accuracy']})
    b = pd.DataFrame({'val_acc': hist.history['val_accuracy']})
    ab = pd.concat([a, b], axis=1)
    ab.boxplot()

def box_loss(hist):
    a = pd.DataFrame({'loss': hist.history['loss']})
    b = pd.DataFrame({'val_loss': hist.history['val_loss']})
    ab = pd.concat([a, b], axis=1)
    ab.boxplot()

def loss_plot(history):
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model Loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['training', 'validation'], loc='best')
    plt.show()

def accuracy_plot(history):
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('Model accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('epoch')
    plt.legend(['training', 'validation'], loc='best')
    plt.show()
from sklearn.metrics import classification_report, confusion_matrix

def metric_func(classifier):
    # predict class probabilities and take the argmax as the class label
    y_pred = classifier.predict(x_test_scale)
    y_pred_final = [np.argmax(i) for i in y_pred]
    print("Classification report as : \n", classification_report(y_test, y_pred_final))
    cm = confusion_matrix(y_test, y_pred_final)
    plt.figure(figsize=(10, 10))
    print("The confusion matrix as : \n")
    sns.heatmap(cm, annot=True, fmt='.2f')
box_loss(hist_clf)
loss_plot(hist_clf)
box_acc(hist_clf)
accuracy_plot(hist_clf)
from sklearn.metrics import classification_report,confusion_matrix
clf_evat = classifier.evaluate(x_test_scale,y_test_cat)
33/33 [==============================] - 0s 2ms/step - loss: 0.8450 - accuracy: 0.6420
The above result indicates a clear case of underfitting; the model needs to be trained longer and/or made more complex.
metric_func(classifier)
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.87 0.97 0.92 172
1 0.60 0.80 0.69 145
2 0.53 0.47 0.50 186
3 0.47 0.32 0.38 180
4 0.59 0.50 0.54 181
5 0.69 0.85 0.76 175
accuracy 0.64 1039
macro avg 0.63 0.65 0.63 1039
weighted avg 0.62 0.64 0.63 1039
The confusion matrix as :
classifier_1 = Sequential()
classifier_1.add(Dense(12,input_shape=(11,),activation = 'relu'))
classifier_1.add(Dense(8,activation='relu'))
classifier_1.add(Dense(6,activation='softmax'))
opt = tf.keras.optimizers.Adam()
classifier_1.compile(loss=losses.categorical_crossentropy,metrics=['accuracy'],optimizer = opt)
classifier_1.summary()
Model: "sequential_29"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_97 (Dense) (None, 12) 144
dense_98 (Dense) (None, 8) 104
dense_99 (Dense) (None, 6) 54
=================================================================
Total params: 302
Trainable params: 302
Non-trainable params: 0
_________________________________________________________________
hist_clf_1 = classifier_1.fit(x_train_scale,y_train_cat,validation_split=0.2,epochs=100,verbose=2)
Epoch 1/100
61/61 - 2s - loss: 1.7025 - accuracy: 0.2508 - val_loss: 1.6538 - val_accuracy: 0.3113 - 2s/epoch - 38ms/step
Epoch 2/100
61/61 - 0s - loss: 1.5964 - accuracy: 0.3385 - val_loss: 1.5386 - val_accuracy: 0.4082 - 265ms/epoch - 4ms/step
...
Epoch 99/100
61/61 - 0s - loss: 0.6252 - accuracy: 0.7663 - val_loss: 0.7029 - val_accuracy: 0.7113 - 283ms/epoch - 5ms/step
Epoch 100/100
61/61 - 0s - loss: 0.6247 - accuracy: 0.7663 - val_loss: 0.7059 - val_accuracy: 0.7134 - 273ms/epoch - 4ms/step
loss_plot(hist_clf_1)
accuracy_plot(hist_clf_1)
classifier_1.evaluate(x_test_scale,y_test_cat)
33/33 [==============================] - 0s 2ms/step - loss: 0.7122 - accuracy: 0.7113
[0.712177038192749, 0.7112608551979065]
metric_func(classifier_1)
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.94 1.00 0.97 172
1 0.66 0.85 0.75 145
2 0.57 0.51 0.54 186
3 0.47 0.35 0.40 180
4 0.68 0.66 0.67 181
5 0.86 0.95 0.91 175
accuracy 0.71 1039
macro avg 0.70 0.72 0.70 1039
weighted avg 0.69 0.71 0.70 1039
The confusion matrix as :
def evaluate(classifier):
    # avoid naming the result `eval`, which shadows the Python built-in
    results = classifier.evaluate(x_test_scale, y_test_cat)
    print("loss and accuracy are : ", results)
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau,EarlyStopping
earl_stp = EarlyStopping(patience=5)
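A common refinement of this callback, not used in the original run, is to monitor validation loss explicitly and roll back to the best weights seen before stopping; sketched here under the hypothetical name `earl_stp_best`:

```python
from tensorflow.keras.callbacks import EarlyStopping

# hypothetical variant: watch val_loss and restore the best weights on stop
earl_stp_best = EarlyStopping(monitor='val_loss', patience=5,
                              restore_best_weights=True)
```

Passing `es=earl_stp_best` to `model_dev` below would then evaluate the best checkpoint rather than the last epoch's weights.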
def model_dev(a=44, bn=False, bn1=False, bn2=False, do=False, do1=False, do2=False,
              add1=False, add2=False, add3=False, add4=False,
              b=None, c=None, d=None, e=None,
              act_func='relu', wt1='glorot_uniform', opt='adam',
              epochs=100, batch_size=32, es=earl_stp):
    model = Sequential()
    model.add(Dense(a, input_shape=(11,), activation=act_func, kernel_initializer=wt1))
    if bn:
        model.add(BatchNormalization())
    if do:
        model.add(Dropout(0.2))
    if add1:
        model.add(Dense(b, activation=act_func, kernel_initializer=wt1))
    if bn1:
        model.add(BatchNormalization())
    if do1:
        model.add(Dropout(0.1))
    if add2:
        model.add(Dense(c, activation=act_func, kernel_initializer=wt1))
    if bn2:
        model.add(BatchNormalization())
    if do2:
        model.add(Dropout(0.2))
    if add3:
        model.add(Dense(d, activation=act_func, kernel_initializer=wt1))
    if add4:
        model.add(Dense(e, activation=act_func, kernel_initializer=wt1))
    model.add(Dense(6, activation='softmax'))
    model.compile(optimizer=opt, loss=tf.keras.losses.categorical_crossentropy,
                  metrics=['accuracy'])
    # use the callback passed in via `es` (wrapped in a list), not the global
    hist = model.fit(x_train_scale, y_train_cat, validation_split=0.2,
                     epochs=epochs, callbacks=[es], batch_size=batch_size, verbose=0)
    print("*-*" * 30)
    print("\n")
    evaluate(model)
    loss_plot(hist)
    accuracy_plot(hist)
    metric_func(model)
model_dev()
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.6678 - accuracy: 0.7478 loss and accuracy are : [0.6678460836410522, 0.7478344440460205]
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.92 1.00 0.96 172
1 0.75 0.94 0.83 145
2 0.65 0.54 0.59 186
3 0.51 0.42 0.46 180
4 0.71 0.67 0.69 181
5 0.85 0.98 0.91 175
accuracy 0.75 1039
macro avg 0.73 0.76 0.74 1039
weighted avg 0.73 0.75 0.74 1039
The confusion matrix as :
model_dev(a=44,bn=True)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 2ms/step - loss: 0.6505 - accuracy: 0.7584 loss and accuracy are : [0.6505299210548401, 0.7584215402603149]
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.94 1.00 0.97 172
1 0.77 0.94 0.85 145
2 0.66 0.55 0.60 186
3 0.51 0.39 0.44 180
4 0.71 0.73 0.72 181
5 0.89 0.98 0.93 175
accuracy 0.76 1039
macro avg 0.75 0.77 0.75 1039
weighted avg 0.74 0.76 0.75 1039
The confusion matrix as :
model_dev(a=44,bn=True,do=True)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 5ms/step - loss: 0.6769 - accuracy: 0.7526 loss and accuracy are : [0.6768950819969177, 0.752646803855896]
33/33 [==============================] - 0s 3ms/step
Classification report as :
precision recall f1-score support
0 0.91 0.98 0.94 172
1 0.71 0.92 0.80 145
2 0.66 0.53 0.59 186
3 0.52 0.43 0.47 180
4 0.74 0.72 0.73 181
5 0.91 0.98 0.95 175
accuracy 0.75 1039
macro avg 0.74 0.76 0.75 1039
weighted avg 0.74 0.75 0.74 1039
The confusion matrix as :
model_dev(bn=True,do=True,add1=True,b=44)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.6476 - accuracy: 0.7565 loss and accuracy are : [0.647642195224762, 0.7564966082572937]
33/33 [==============================] - 0s 3ms/step
Classification report as :
precision recall f1-score support
0 0.92 1.00 0.96 172
1 0.73 0.90 0.81 145
2 0.65 0.55 0.59 186
3 0.53 0.41 0.46 180
4 0.76 0.74 0.75 181
5 0.86 1.00 0.93 175
accuracy 0.76 1039
macro avg 0.74 0.77 0.75 1039
weighted avg 0.74 0.76 0.74 1039
The confusion matrix as :
model_dev(bn=True,do=True,add1=True,b=44,bn1=True)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.5841 - accuracy: 0.7796 loss and accuracy are : [0.5841214656829834, 0.7795957922935486]
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.92 1.00 0.96 172
1 0.80 0.97 0.88 145
2 0.65 0.60 0.62 186
3 0.55 0.38 0.45 180
4 0.75 0.80 0.77 181
5 0.90 1.00 0.95 175
accuracy 0.78 1039
macro avg 0.76 0.79 0.77 1039
weighted avg 0.76 0.78 0.77 1039
The confusion matrix as :
model_dev(bn=True,do=True,add1=True,b=30,bn1=True,add2=True,c=20)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 4ms/step - loss: 0.5926 - accuracy: 0.7594 loss and accuracy are : [0.5925556421279907, 0.759384036064148]
33/33 [==============================] - 1s 7ms/step
Classification report as :
precision recall f1-score support
0 0.91 1.00 0.96 172
1 0.76 0.92 0.83 145
2 0.64 0.53 0.58 186
3 0.50 0.38 0.44 180
4 0.73 0.78 0.76 181
5 0.91 0.99 0.95 175
accuracy 0.76 1039
macro avg 0.74 0.77 0.75 1039
weighted avg 0.74 0.76 0.75 1039
The confusion matrix as :
model_dev(bn=True,do=True,add1=True,b=44,bn1=True,add2=True,c=44)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.5811 - accuracy: 0.7632 loss and accuracy are : [0.5811459422111511, 0.7632339000701904]
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.95 1.00 0.97 172
1 0.71 0.98 0.82 145
2 0.70 0.38 0.49 186
3 0.51 0.47 0.49 180
4 0.75 0.83 0.79 181
5 0.91 0.99 0.95 175
accuracy 0.76 1039
macro avg 0.75 0.78 0.75 1039
weighted avg 0.75 0.76 0.75 1039
The confusion matrix as :
model_dev(bn=True,do=True,add1=True,b=44,bn1=True,add2=True,c=44,add3=True,d=44)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.6450 - accuracy: 0.7517 loss and accuracy are : [0.645025908946991, 0.751684308052063]
33/33 [==============================] - 0s 3ms/step
Classification report as :
precision recall f1-score support
0 0.94 1.00 0.97 172
1 0.74 0.96 0.83 145
2 0.65 0.54 0.59 186
3 0.56 0.36 0.44 180
4 0.72 0.72 0.72 181
5 0.81 1.00 0.90 175
accuracy 0.75 1039
macro avg 0.74 0.76 0.74 1039
weighted avg 0.73 0.75 0.73 1039
The confusion matrix as :
model_dev(bn=True,do=True,add1=True,b=44,bn1=True,add2=True,c=44,wt1='he_uniform')
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.6202 - accuracy: 0.7411 loss and accuracy are : [0.6202229857444763, 0.7410972118377686]
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.92 1.00 0.96 172
1 0.71 0.90 0.79 145
2 0.62 0.52 0.56 186
3 0.51 0.40 0.45 180
4 0.76 0.70 0.73 181
5 0.84 0.99 0.91 175
accuracy 0.74 1039
macro avg 0.73 0.75 0.73 1039
weighted avg 0.72 0.74 0.73 1039
The confusion matrix as :
opt1 = tf.keras.optimizers.RMSprop(rho=0.99)
model_dev(bn=True,do=True,add1=True,b=44,bn1=True,add2=True,c=44,wt1='he_uniform',opt=opt1)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.6239 - accuracy: 0.7526 loss and accuracy are : [0.6239337921142578, 0.752646803855896]
33/33 [==============================] - 0s 3ms/step
Classification report as :
precision recall f1-score support
0 0.91 1.00 0.95 172
1 0.79 0.94 0.86 145
2 0.64 0.52 0.57 186
3 0.50 0.37 0.42 180
4 0.71 0.77 0.74 181
5 0.86 0.99 0.92 175
accuracy 0.75 1039
macro avg 0.74 0.76 0.74 1039
weighted avg 0.73 0.75 0.74 1039
The confusion matrix as :
model_dev(a=66,bn=True,do=True,add1=True,b=33,bn1=True,add2=True,c=22,wt1='he_uniform',opt=tf.keras.optimizers.RMSprop(momentum=0.1))
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.5936 - accuracy: 0.7623 loss and accuracy are : [0.5936322212219238, 0.7622714042663574]
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.95 1.00 0.97 172
1 0.77 0.93 0.84 145
2 0.66 0.56 0.61 186
3 0.52 0.40 0.45 180
4 0.75 0.75 0.75 181
5 0.85 0.99 0.92 175
accuracy 0.76 1039
macro avg 0.75 0.77 0.76 1039
weighted avg 0.75 0.76 0.75 1039
The confusion matrix as :
model_dev(bn=True,do=True,add1=True,b=44,bn1=True)
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 33/33 [==============================] - 0s 3ms/step - loss: 0.6084 - accuracy: 0.7623 loss and accuracy are : [0.608428955078125, 0.7622714042663574]
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.93 1.00 0.97 172
1 0.72 0.96 0.82 145
2 0.65 0.56 0.61 186
3 0.52 0.38 0.44 180
4 0.74 0.73 0.74 181
5 0.92 1.00 0.96 175
accuracy 0.76 1039
macro avg 0.75 0.77 0.75 1039
weighted avg 0.75 0.76 0.75 1039
The confusion matrix as :
model_final = Sequential()
model_final.add(Dense(44,input_shape=(11,),activation = 'relu'))
model_final.add(BatchNormalization())
model_final.add(Dropout(0.2))
model_final.add(Dense(44,activation = 'relu'))
model_final.add(BatchNormalization())
model_final.add(Dense(6,activation='softmax'))
adam = tf.keras.optimizers.Adam(amsgrad=True, use_ema=True)
# pass the configured optimizer object, not the string 'adam', so amsgrad/ema take effect
model_final.compile(optimizer=adam, loss=tf.keras.losses.categorical_crossentropy, metrics=['accuracy'])
history_final = model_final.fit(x_train_scale,y_train_cat,validation_split=0.2,epochs=100,callbacks=earl_stp,batch_size = 32,verbose = 0)
evaluate(model_final)
33/33 [==============================] - 0s 3ms/step - loss: 0.5746 - accuracy: 0.7690 loss and accuracy are : [0.5746031403541565, 0.7690086364746094]
box_loss(history_final)
box_acc(history_final)
loss_plot(history_final)
accuracy_plot(history_final)
metric_func(model_final)
33/33 [==============================] - 0s 2ms/step
Classification report as :
precision recall f1-score support
0 0.95 1.00 0.97 172
1 0.76 0.94 0.84 145
2 0.64 0.55 0.59 186
3 0.50 0.39 0.44 180
4 0.78 0.78 0.78 181
5 0.90 1.00 0.95 175
accuracy 0.77 1039
macro avg 0.76 0.78 0.76 1039
weighted avg 0.75 0.77 0.76 1039
The confusion matrix as :
signal_data3 = signal_data2.sample(2000,replace=True,ignore_index=True)
signal_data3.shape
(2000, 12)
# split the data into x and y
X1 = signal_data3.drop('Signal_Strength',axis = 1)
Y1 = signal_data3['Signal_Strength']
X1.shape,Y1.shape
((2000, 11), (2000,))
# shift labels 3-8 down to 0-5 so they index the 6-unit softmax output
rep = {3: 0, 4: 1, 5: 2, 6: 3, 7: 4, 8: 5}
Y1 = Y1.replace(rep)
Y1.value_counts()
2 840 3 815 4 229 1 83 5 21 0 12 Name: Signal_Strength, dtype: int64
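Categorical cross-entropy with a 6-unit softmax expects class indices 0–5, which is why the raw signal strengths 3–8 are shifted down. A small sketch of the mapping and its inverse (useful for reading predicted indices back as signal strengths):

```python
rep = {3: 0, 4: 1, 5: 2, 6: 3, 7: 4, 8: 5}
inv = {v: k for k, v in rep.items()}        # class index -> original strength

labels = [5, 6, 5, 7, 8, 3]
encoded = [rep[y] for y in labels]          # what the network trains on
decoded = [inv[y] for y in encoded]         # back to the original scale

print(encoded)            # [2, 3, 2, 4, 5, 0]
print(decoded == labels)  # True
```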
X1_sm,Y1_sm = SMOTE(random_state=42).fit_resample(X1,Y1)
X1.shape,Y1.shape,Y1_sm.value_counts()
((2000, 11), (2000,), 1 840 3 840 2 840 4 840 5 840 0 840 Name: Signal_Strength, dtype: int64)
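SMOTE balances the classes by synthesizing new minority-class samples: each synthetic point is an interpolation between an existing minority sample and one of its nearest minority-class neighbours. A minimal NumPy sketch of that core interpolation step (an illustration only, not imbalanced-learn's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def smote_point(x, neighbor):
    """One synthetic sample on the segment between x and a minority-class neighbor."""
    lam = rng.random()                # interpolation factor in [0, 1)
    return x + lam * (neighbor - x)

x = np.array([1.0, 2.0])
nb = np.array([3.0, 4.0])
s = smote_point(x, nb)

# the synthetic point always lies on the segment between the two originals
print(np.all(s >= np.minimum(x, nb)) and np.all(s <= np.maximum(x, nb)))  # True
```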
X_train,X_test,y_train,y_test = train_test_split(X1_sm,Y1_sm,test_size=0.3,random_state=42)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scale = scaler.fit_transform(X_train)
x_test_scale = scaler.transform(X_test)  # transform only: reuse training-set statistics, avoid leakage
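StandardScaler's mean and standard deviation should be estimated on the training split only and then reused on the test split, so the test data cannot influence the preprocessing. A minimal NumPy sketch of what that fit/transform convention computes (z-scoring with train-set statistics):

```python
import numpy as np

train = np.array([[1.0], [2.0], [3.0], [4.0]])
test = np.array([[2.0], [6.0]])

mu, sigma = train.mean(axis=0), train.std(axis=0)  # "fit" on TRAIN only
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma                  # "transform" test with train stats

print(mu[0])                  # 2.5
print(test_scaled.ravel())    # test values on the training set's scale
```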
y_train_cat = to_categorical(y_train,6)
y_test_cat = to_categorical(y_test,6)
model_dev(a=66,bn=True,do=True,add1=True,b=33,bn1=True,add2=True,c=22,wt1='he_uniform',opt=tf.keras.optimizers.Adam(amsgrad=True,use_ema=True))
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 48/48 [==============================] - 0s 3ms/step - loss: 0.3777 - accuracy: 0.8519 loss and accuracy are : [0.37770581245422363, 0.8518518805503845]
48/48 [==============================] - 0s 3ms/step
Classification report as :
precision recall f1-score support
0 0.98 1.00 0.99 244
1 0.87 0.96 0.91 245
2 0.76 0.69 0.73 262
3 0.69 0.59 0.63 260
4 0.83 0.89 0.86 244
5 0.95 1.00 0.97 257
accuracy 0.85 1512
macro avg 0.85 0.86 0.85 1512
weighted avg 0.84 0.85 0.85 1512
The confusion matrix as :
• DOMAIN: Autonomous Vehicles
• CONTEXT: Recognising multi-digit numbers in photographs captured at street level is an important component of modern-day map making. A classic example of a corpus of such street-level photographs is Google’s Street View imagery, composed of hundreds of millions of geo-located 360-degree panoramic images. The ability to automatically transcribe an address number from a geolocated patch of pixels and associate the transcribed number with a known street address helps pinpoint, with a high degree of accuracy, the location of the building it represents. More broadly, recognising numbers in photographs is a problem of interest to the optical character recognition community. While OCR on constrained domains like document processing is well studied, arbitrary multi-character text recognition in photographs is still highly challenging. This difficulty arises due to the wide variability in the visual appearance of text in the wild on account of a large range of fonts, colours, styles, orientations, and character arrangements. The recognition problem is further complicated by environmental factors such as lighting, shadows, specularity, and occlusions, as well as by image acquisition factors such as resolution, motion, and focus blurs. In this project, we will use the dataset with images centred around a single digit (many of the images do contain some distractors at the sides). Although we are taking a sample of the data which is simpler, it is more complex than MNIST because of the distractors.
• DATA DESCRIPTION: The SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with the minimal requirement on data formatting but comes from a significantly harder, unsolved, real-world problem (recognising digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images.
The label for each of these images is the prominent digit in that image, i.e. 2, 6, 7, and 4 respectively. The dataset has been provided in the form of h5py files; you can read about this file format here: https://docs.h5py.org/en/stable/ Acknowledgement: Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, Andrew Y. Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
• PROJECT OBJECTIVE: To build a digit classifier on the SVHN (Street View House Numbers) dataset.
import h5py
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau,EarlyStopping
import random
random.seed(42)
tf.random.set_seed(98)
def hist_plot(hist):
    a = pd.DataFrame({'acc': hist.history['accuracy']})
    b = pd.DataFrame({'val_acc': hist.history['val_accuracy']})
    ab = pd.concat([a, b], axis=1)
    ab.boxplot()

def loss_plot(history):
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model Loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['training', 'validation'], loc='best')
    plt.show()

def accuracy_plot(history):
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('Model accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('epoch')
    plt.legend(['training', 'validation'], loc='best')
    plt.show()

def evaluate(classifier):
    # avoid shadowing the built-in eval()
    results = classifier.evaluate(X_test, y_test)
    print("loss and accuracy are : ", results)

def metric_func(classifier):
    y_pred = classifier.predict(X_test)
    y_pred_final = []
    for i in y_pred:
        y_pred_final.append(np.argmax(i))
    print("The classification report is : \n", classification_report(Y_test, y_pred_final))
    print("The confusion matrix as : \n ")
    cm = confusion_matrix(Y_test, y_pred_final)
    plt.figure(figsize=(20, 20))
    sns.heatmap(cm, annot=True, fmt=".1f")
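metric_func turns each row of softmax probabilities into a predicted class label by taking the index of the largest probability; with NumPy this per-row argmax can be expressed in a single vectorized call, sketched here on hypothetical softmax outputs:

```python
import numpy as np

# hypothetical softmax outputs for 3 samples over 4 classes
y_pred = np.array([
    [0.10, 0.70, 0.10, 0.10],
    [0.05, 0.05, 0.80, 0.10],
    [0.60, 0.20, 0.10, 0.10],
])

labels = np.argmax(y_pred, axis=1)  # index of the largest probability per row
print(labels.tolist())              # [1, 2, 0]
```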
checkpoint = ModelCheckpoint("model_weights.h5", monitor='val_accuracy',
                             save_weights_only=True, mode='max', verbose=1)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.000001, mode='auto')
es = EarlyStopping(monitor='val_loss', patience=5)
earl_stp = [checkpoint, reduce_lr, es]
def model_dev(a=700, bn=False, do=False, add1=False, add2=False, add3=False, add4=False,
              b=None, c=None, d=None, e=None,
              act_func='relu', wt1='glorot_uniform', opt='adam',
              epochs=50, batch_size=512, earl_stp=earl_stp):
    model = Sequential()
    # use act_func here too, so the argument applies to every hidden layer
    model.add(Dense(a, input_shape=(ima_size,), activation=act_func, kernel_initializer=wt1))
    if bn:
        model.add(BatchNormalization())
    if do:
        model.add(Dropout(0.5))
    if add1:
        model.add(Dense(b, activation=act_func, kernel_initializer=wt1))
        if bn:
            model.add(BatchNormalization())
        if do:
            model.add(Dropout(0.3))
    if add2:
        model.add(Dense(c, activation=act_func, kernel_initializer=wt1))
        if bn:
            model.add(BatchNormalization())
        if do:
            model.add(Dropout(0.2))
    if add3:
        model.add(Dense(d, activation=act_func, kernel_initializer=wt1))
    if add4:
        model.add(Dense(e, activation=act_func, kernel_initializer=wt1))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer=opt, loss=tf.keras.losses.categorical_crossentropy, metrics=['accuracy'])
    hist = model.fit(X_train, y_train, validation_split=0.2, epochs=epochs,
                     callbacks=earl_stp, batch_size=batch_size, verbose=0)
    print("*-*" * 30)
    print("\n")
    evaluate(model)
    loss_plot(hist)
    accuracy_plot(hist)
    metric_func(model)
svhn = h5py.File("Autonomous_Vehicles_SVHN_single_grey1.h5","r")
svhn.keys()
<KeysViewHDF5 ['X_test', 'X_train', 'X_val', 'y_test', 'y_train', 'y_val']>
X_train = svhn['X_val']
X_test = svhn['X_test']
Y_train =svhn['y_val']
Y_test = svhn['y_test']
print(X_train.shape)
print(Y_train.shape)
print(X_test.shape)
print(Y_test.shape)
(60000, 32, 32) (60000,) (18000, 32, 32) (18000,)
plt.figure(figsize=(20, 10))
for i in range(10):
    plt.subplot(1, 10, i+1)
    plt.imshow(X_train[i], cmap="gray")
    plt.axis('off')
plt.show()
print("labels for training images are : " , Y_train[:10])
labels for training images are : [0 0 0 0 0 0 0 0 0 0]
X_train.shape[0]
60000
X_train = np.array(X_train)
X_test = np.array(X_test)
X_train.shape,X_test.shape
((60000, 32, 32), (18000, 32, 32))
ima_size = 32*32
X_train = X_train.reshape(len(X_train),ima_size)
X_test = X_test.reshape(len(X_test),ima_size)
X_train.shape,X_test.shape
((60000, 1024), (18000, 1024))
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255.0
X_test = X_test / 255.0
print('Training set', X_train.shape, Y_train.shape)
print('Test set', X_test.shape, Y_test.shape)
Training set (60000, 1024) (60000,) Test set (18000, 1024) (18000,)
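A dense network expects a flat feature vector, so each 32x32 grey image becomes a 1024-length row, and dividing by 255 rescales pixel intensities into [0, 1]. The same two steps on a toy batch:

```python
import numpy as np

# toy "image batch": 2 images of 32x32 with pixel values in 0-255
batch = (np.arange(2 * 32 * 32, dtype=np.float64) % 256).reshape(2, 32, 32)

flat = batch.reshape(len(batch), 32 * 32) / 255.0  # flatten, then scale to [0, 1]

print(flat.shape)   # (2, 1024)
print(flat.min(), flat.max())   # 0.0 1.0
```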
Y_train_1 = pd.DataFrame(Y_train)
Y_train_1.value_counts().sort_index()
0 6000 1 6000 2 6000 3 6000 4 6000 5 6000 6 6000 7 6000 8 6000 9 6000 dtype: int64
from tensorflow.keras.utils import to_categorical
num_classes = 10
y_train = to_categorical(Y_train,num_classes)
y_test = to_categorical(Y_test,num_classes)
y_train[:5]
array([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
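to_categorical turns each integer label into a one-hot row: a 1.0 at the label's index and 0.0 elsewhere, which is the target format categorical_crossentropy expects. A NumPy sketch of the same encoding:

```python
import numpy as np

def one_hot(labels, num_classes):
    """NumPy equivalent of keras.utils.to_categorical for 1-D integer labels."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0   # set one position per row
    return out

y = one_hot([0, 3, 9], 10)
print(y[1].tolist())   # a single 1.0 at index 3
```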
print(y_train.shape)
print(y_test.shape)
(60000, 10) (18000, 10)
X_train.shape,y_train.shape
((60000, 1024), (60000, 10))
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import Adam,SGD
model = Sequential()
model.add(Dense(700,input_shape=(ima_size,),activation = 'relu'))
model.add(Dense(10,activation='softmax'))
loss = tf.keras.losses.categorical_crossentropy
opt = tf.keras.optimizers.RMSprop(momentum=0.1)
model.compile(optimizer=opt,loss=loss,metrics=['accuracy'])
model.summary()
Model: "sequential_44"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_147 (Dense) (None, 700) 717500
dense_148 (Dense) (None, 10) 7010
=================================================================
Total params: 724,510
Trainable params: 724,510
Non-trainable params: 0
_________________________________________________________________
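The parameter counts in the summary follow directly from the Dense layer formula: params = inputs × units + units (one weight per input-unit pair, plus one bias per unit). A quick check against the two layers above:

```python
def dense_params(n_in, n_units):
    """Weights (n_in * n_units) plus one bias per unit."""
    return n_in * n_units + n_units

print(dense_params(1024, 700))   # 717500 (the 700-unit hidden layer)
print(dense_params(700, 10))     # 7010 (the softmax output layer)
print(dense_params(1024, 700) + dense_params(700, 10))   # 724510 total
```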
# the run below uses 30 epochs and the default batch size of 32 (1500 steps/epoch on 48,000 training samples)
h1 = model.fit(X_train, y_train, validation_split=0.2, epochs=30, callbacks=earl_stp)
Epoch 1/30 1499/1500 [============================>.] - ETA: 0s - loss: 2.1387 - accuracy: 0.2300 Epoch 1: saving model to model_weights.h5 1500/1500 [==============================] - 41s 26ms/step - loss: 2.1385 - accuracy: 0.2302 - val_loss: 2.0187 - val_accuracy: 0.1841 - lr: 0.0010 Epoch 2/30 1500/1500 [==============================] - ETA: 0s - loss: 1.5862 - accuracy: 0.4739 Epoch 2: saving model to model_weights.h5 1500/1500 [==============================] - 33s 22ms/step - loss: 1.5862 - accuracy: 0.4739 - val_loss: 1.5364 - val_accuracy: 0.4245 - lr: 0.0010 Epoch 3/30 1497/1500 [============================>.] - ETA: 0s - loss: 1.3411 - accuracy: 0.5696 Epoch 3: saving model to model_weights.h5 1500/1500 [==============================] - 40s 27ms/step - loss: 1.3415 - accuracy: 0.5694 - val_loss: 1.2484 - val_accuracy: 0.6028 - lr: 0.0010 Epoch 4/30 1500/1500 [==============================] - ETA: 0s - loss: 1.2177 - accuracy: 0.6146 Epoch 4: saving model to model_weights.h5 1500/1500 [==============================] - 40s 27ms/step - loss: 1.2177 - accuracy: 0.6146 - val_loss: 1.1241 - val_accuracy: 0.6453 - lr: 0.0010 Epoch 5/30 1498/1500 [============================>.] - ETA: 0s - loss: 1.1508 - accuracy: 0.6406 Epoch 5: saving model to model_weights.h5 1500/1500 [==============================] - 40s 27ms/step - loss: 1.1510 - accuracy: 0.6405 - val_loss: 1.0868 - val_accuracy: 0.6676 - lr: 0.0010 Epoch 6/30 1499/1500 [============================>.] 
- ETA: 0s - loss: 1.1037 - accuracy: 0.6580 Epoch 6: saving model to model_weights.h5 1500/1500 [==============================] - 33s 22ms/step - loss: 1.1037 - accuracy: 0.6580 - val_loss: 1.0998 - val_accuracy: 0.6611 - lr: 0.0010 Epoch 7/30 1500/1500 [==============================] - ETA: 0s - loss: 1.0666 - accuracy: 0.6719 Epoch 7: saving model to model_weights.h5 1500/1500 [==============================] - 29s 20ms/step - loss: 1.0666 - accuracy: 0.6719 - val_loss: 1.0207 - val_accuracy: 0.6860 - lr: 0.0010 Epoch 8/30 1499/1500 [============================>.] - ETA: 0s - loss: 1.0304 - accuracy: 0.6827 Epoch 8: saving model to model_weights.h5 1500/1500 [==============================] - 29s 20ms/step - loss: 1.0306 - accuracy: 0.6826 - val_loss: 1.0676 - val_accuracy: 0.6712 - lr: 0.0010 Epoch 9/30 1497/1500 [============================>.] - ETA: 0s - loss: 1.0015 - accuracy: 0.6947 Epoch 9: saving model to model_weights.h5 1500/1500 [==============================] - 31s 20ms/step - loss: 1.0027 - accuracy: 0.6944 - val_loss: 1.1204 - val_accuracy: 0.6546 - lr: 0.0010 Epoch 10/30 1500/1500 [==============================] - ETA: 0s - loss: 0.9800 - accuracy: 0.7018 Epoch 10: saving model to model_weights.h5 1500/1500 [==============================] - 31s 21ms/step - loss: 0.9800 - accuracy: 0.7018 - val_loss: 1.1141 - val_accuracy: 0.6698 - lr: 0.0010 Epoch 11/30 1498/1500 [============================>.] - ETA: 0s - loss: 0.9571 - accuracy: 0.7101 Epoch 11: saving model to model_weights.h5 1500/1500 [==============================] - 36s 24ms/step - loss: 0.9571 - accuracy: 0.7100 - val_loss: 0.9415 - val_accuracy: 0.7199 - lr: 0.0010 Epoch 12/30 1497/1500 [============================>.] 
- ETA: 0s - loss: 0.9382 - accuracy: 0.7175 Epoch 12: saving model to model_weights.h5 1500/1500 [==============================] - 32s 21ms/step - loss: 0.9380 - accuracy: 0.7175 - val_loss: 0.8661 - val_accuracy: 0.7477 - lr: 0.0010 Epoch 13/30 1498/1500 [============================>.] - ETA: 0s - loss: 0.9249 - accuracy: 0.7222 Epoch 13: saving model to model_weights.h5 1500/1500 [==============================] - 33s 22ms/step - loss: 0.9247 - accuracy: 0.7222 - val_loss: 0.8618 - val_accuracy: 0.7404 - lr: 0.0010 Epoch 14/30 1498/1500 [============================>.] - ETA: 0s - loss: 0.9083 - accuracy: 0.7257 Epoch 14: saving model to model_weights.h5 1500/1500 [==============================] - 34s 23ms/step - loss: 0.9078 - accuracy: 0.7258 - val_loss: 0.9098 - val_accuracy: 0.7314 - lr: 0.0010 Epoch 15/30 1499/1500 [============================>.] - ETA: 0s - loss: 0.8994 - accuracy: 0.7304 Epoch 15: saving model to model_weights.h5 1500/1500 [==============================] - 32s 22ms/step - loss: 0.8995 - accuracy: 0.7303 - val_loss: 0.9596 - val_accuracy: 0.7201 - lr: 0.0010 Epoch 16/30 1499/1500 [============================>.] - ETA: 0s - loss: 0.8885 - accuracy: 0.7337 Epoch 16: saving model to model_weights.h5 1500/1500 [==============================] - 33s 22ms/step - loss: 0.8890 - accuracy: 0.7336 - val_loss: 0.8925 - val_accuracy: 0.7347 - lr: 0.0010 Epoch 17/30 1500/1500 [==============================] - ETA: 0s - loss: 0.8799 - accuracy: 0.7376 Epoch 17: saving model to model_weights.h5 1500/1500 [==============================] - 36s 24ms/step - loss: 0.8799 - accuracy: 0.7376 - val_loss: 0.9683 - val_accuracy: 0.7129 - lr: 0.0010 Epoch 18/30 1499/1500 [============================>.] - ETA: 0s - loss: 0.8708 - accuracy: 0.7421 Epoch 18: saving model to model_weights.h5 1500/1500 [==============================] - 34s 23ms/step - loss: 0.8707 - accuracy: 0.7421 - val_loss: 0.8898 - val_accuracy: 0.7437 - lr: 0.0010
hist_plot(h1)
loss_plot(h1),accuracy_plot(h1)
(None, None)
- The baseline model reaches about 76% test accuracy.
- Validation accuracy fluctuates noticeably as the epochs progress.
- Loss drops steeply over the first ~3 epochs, then flattens.
- Accuracy can likely be improved via the model_dev function by increasing the ANN's capacity.
model.evaluate(X_test,y_test)
563/563 [==============================] - 6s 9ms/step - loss: 0.8013 - accuracy: 0.7629
[0.8012673258781433, 0.7628889083862305]
metric_func(model)
563/563 [==============================] - 4s 7ms/step
The classification report is :
precision recall f1-score support
0 0.72 0.87 0.79 1814
1 0.71 0.86 0.78 1828
2 0.86 0.78 0.82 1803
3 0.56 0.82 0.67 1719
4 0.88 0.77 0.82 1812
5 0.80 0.69 0.74 1768
6 0.77 0.76 0.77 1832
7 0.85 0.80 0.83 1808
8 0.78 0.66 0.71 1812
9 0.89 0.61 0.73 1804
accuracy 0.76 18000
macro avg 0.78 0.76 0.76 18000
weighted avg 0.78 0.76 0.76 18000
The confusion matrix as :
model_dev()
Epoch 1: saving model to model_weights.h5 Epoch 2: saving model to model_weights.h5 Epoch 3: saving model to model_weights.h5 Epoch 4: saving model to model_weights.h5 Epoch 5: saving model to model_weights.h5 Epoch 6: saving model to model_weights.h5 Epoch 7: saving model to model_weights.h5 Epoch 8: saving model to model_weights.h5 Epoch 9: saving model to model_weights.h5 Epoch 10: saving model to model_weights.h5 Epoch 11: saving model to model_weights.h5 Epoch 12: saving model to model_weights.h5 Epoch 13: saving model to model_weights.h5 Epoch 14: saving model to model_weights.h5 Epoch 15: saving model to model_weights.h5 Epoch 16: saving model to model_weights.h5 Epoch 17: saving model to model_weights.h5 Epoch 18: saving model to model_weights.h5 Epoch 19: saving model to model_weights.h5 Epoch 20: saving model to model_weights.h5 Epoch 21: saving model to model_weights.h5 Epoch 22: saving model to model_weights.h5 Epoch 23: saving model to model_weights.h5 Epoch 24: saving model to model_weights.h5 Epoch 25: saving model to model_weights.h5 Epoch 26: saving model to model_weights.h5 Epoch 27: saving model to model_weights.h5 Epoch 28: saving model to model_weights.h5 Epoch 29: saving model to model_weights.h5 Epoch 30: saving model to model_weights.h5 Epoch 31: saving model to model_weights.h5 Epoch 32: saving model to model_weights.h5 Epoch 33: saving model to model_weights.h5 Epoch 34: saving model to model_weights.h5 Epoch 35: saving model to model_weights.h5 Epoch 36: saving model to model_weights.h5 Epoch 37: saving model to model_weights.h5 Epoch 38: saving model to model_weights.h5 Epoch 39: saving model to model_weights.h5 Epoch 40: saving model to model_weights.h5 Epoch 41: saving model to model_weights.h5 Epoch 42: saving model to model_weights.h5 Epoch 43: saving model to model_weights.h5 Epoch 44: saving model to model_weights.h5 Epoch 45: saving model to model_weights.h5 
*-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 563/563 [==============================] - 3s 5ms/step - loss: 0.6009 - accuracy: 0.8287 loss and accuracy are : [0.6008739471435547, 0.8287222385406494]
563/563 [==============================] - 3s 4ms/step
The classification report is :
precision recall f1-score support
0 0.89 0.86 0.87 1814
1 0.73 0.91 0.81 1828
2 0.87 0.84 0.86 1803
3 0.69 0.84 0.76 1719
4 0.94 0.80 0.87 1812
5 0.86 0.77 0.81 1768
6 0.87 0.80 0.83 1832
7 0.87 0.86 0.86 1808
8 0.80 0.80 0.80 1812
9 0.85 0.79 0.82 1804
accuracy 0.83 18000
macro avg 0.84 0.83 0.83 18000
weighted avg 0.84 0.83 0.83 18000
The confusion matrix as :
model_dev(a = 700,add1 = True,b=350,act_func='relu')
Epoch 1: saving model to model_weights.h5 Epoch 2: saving model to model_weights.h5 Epoch 3: saving model to model_weights.h5 Epoch 4: saving model to model_weights.h5 Epoch 5: saving model to model_weights.h5 Epoch 6: saving model to model_weights.h5 Epoch 7: saving model to model_weights.h5 Epoch 8: saving model to model_weights.h5 Epoch 9: saving model to model_weights.h5 Epoch 10: saving model to model_weights.h5 Epoch 11: saving model to model_weights.h5 Epoch 12: saving model to model_weights.h5 Epoch 13: saving model to model_weights.h5 Epoch 14: saving model to model_weights.h5 Epoch 15: saving model to model_weights.h5 Epoch 16: saving model to model_weights.h5 Epoch 17: saving model to model_weights.h5 Epoch 18: saving model to model_weights.h5 Epoch 19: saving model to model_weights.h5 Epoch 20: saving model to model_weights.h5 Epoch 21: saving model to model_weights.h5 Epoch 22: saving model to model_weights.h5 Epoch 23: saving model to model_weights.h5 Epoch 24: saving model to model_weights.h5 Epoch 25: saving model to model_weights.h5 Epoch 26: saving model to model_weights.h5 Epoch 27: saving model to model_weights.h5 Epoch 28: saving model to model_weights.h5 Epoch 29: saving model to model_weights.h5 Epoch 30: saving model to model_weights.h5 Epoch 31: saving model to model_weights.h5 Epoch 32: saving model to model_weights.h5 Epoch 33: saving model to model_weights.h5 *-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-**-* 563/563 [==============================] - 5s 9ms/step - loss: 0.5369 - accuracy: 0.8371 loss and accuracy are : [0.5369101166725159, 0.8371111154556274]
563/563 [==============================] - 3s 6ms/step
The classification report is :
precision recall f1-score support
0 0.84 0.89 0.86 1814
1 0.79 0.90 0.84 1828
2 0.87 0.86 0.87 1803
3 0.73 0.85 0.78 1719
4 0.96 0.81 0.88 1812
5 0.83 0.82 0.83 1768
6 0.92 0.74 0.82 1832
7 0.91 0.85 0.88 1808
8 0.73 0.85 0.79 1812
9 0.87 0.79 0.83 1804
accuracy 0.84 18000
macro avg 0.85 0.84 0.84 18000
weighted avg 0.85 0.84 0.84 18000
The confusion matrix as :
model_dev(a = 700,add1 = True,b=350,add2 = True,c = 175,act_func='relu')
Trained for 31 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 6s 10ms/step - loss: 0.4659 - accuracy: 0.8591
loss and accuracy are : [0.465922087430954, 0.8591111302375793]
The classification report is:
precision recall f1-score support
0 0.84 0.91 0.87 1814
1 0.78 0.94 0.85 1828
2 0.86 0.90 0.88 1803
3 0.77 0.87 0.81 1719
4 0.94 0.87 0.90 1812
5 0.89 0.79 0.84 1768
6 0.90 0.83 0.86 1832
7 0.91 0.89 0.90 1808
8 0.88 0.78 0.83 1812
9 0.87 0.81 0.84 1804
accuracy 0.86 18000
macro avg 0.86 0.86 0.86 18000
weighted avg 0.86 0.86 0.86 18000
The confusion matrix is:
model_dev(a=700, add1=True, b=500, add2=True, c=175, act_func='relu')
Trained for 31 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 6s 10ms/step - loss: 0.4079 - accuracy: 0.8773
loss and accuracy are : [0.40786710381507874, 0.8773333430290222]
The classification report is:
precision recall f1-score support
0 0.87 0.93 0.90 1814
1 0.83 0.94 0.88 1828
2 0.87 0.92 0.90 1803
3 0.85 0.87 0.86 1719
4 0.91 0.92 0.91 1812
5 0.91 0.80 0.85 1768
6 0.89 0.86 0.88 1832
7 0.94 0.87 0.91 1808
8 0.90 0.78 0.84 1812
9 0.83 0.87 0.85 1804
accuracy 0.88 18000
macro avg 0.88 0.88 0.88 18000
weighted avg 0.88 0.88 0.88 18000
The confusion matrix is:
model_dev(a=700, add1=True, b=500, add2=True, c=500, act_func='relu')
Trained for 34 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 8s 14ms/step - loss: 0.4062 - accuracy: 0.8742
loss and accuracy are : [0.40621963143348694, 0.8742222189903259]
The classification report is:
precision recall f1-score support
0 0.87 0.94 0.90 1814
1 0.92 0.87 0.89 1828
2 0.87 0.92 0.90 1803
3 0.76 0.91 0.83 1719
4 0.91 0.90 0.90 1812
5 0.88 0.85 0.87 1768
6 0.89 0.83 0.86 1832
7 0.95 0.85 0.90 1808
8 0.82 0.83 0.82 1812
9 0.90 0.86 0.88 1804
accuracy 0.87 18000
macro avg 0.88 0.87 0.87 18000
weighted avg 0.88 0.87 0.87 18000
The confusion matrix is:
model_dev(a=700, add1=True, b=350, add2=True, c=175, add3=True, d=90, act_func='relu')
Trained for 31 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 6s 11ms/step - loss: 0.4486 - accuracy: 0.8607
loss and accuracy are : [0.44858518242836, 0.8606666922569275]
The classification report is:
precision recall f1-score support
0 0.88 0.90 0.89 1814
1 0.79 0.93 0.85 1828
2 0.87 0.90 0.88 1803
3 0.82 0.84 0.83 1719
4 0.92 0.87 0.90 1812
5 0.88 0.80 0.84 1768
6 0.86 0.85 0.86 1832
7 0.85 0.93 0.89 1808
8 0.84 0.79 0.82 1812
9 0.91 0.78 0.84 1804
accuracy 0.86 18000
macro avg 0.86 0.86 0.86 18000
weighted avg 0.86 0.86 0.86 18000
The confusion matrix is:
model_dev(a=700, add1=True, b=350, add2=True, c=700, add3=True, d=90, act_func='relu')
Trained for 32 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 5s 9ms/step - loss: 0.4118 - accuracy: 0.8696
loss and accuracy are : [0.41178199648857117, 0.8695555329322815]
The classification report is:
precision recall f1-score support
0 0.88 0.93 0.91 1814
1 0.87 0.89 0.88 1828
2 0.81 0.91 0.86 1803
3 0.88 0.84 0.86 1719
4 0.90 0.90 0.90 1812
5 0.83 0.87 0.85 1768
6 0.89 0.86 0.87 1832
7 0.87 0.90 0.88 1808
8 0.93 0.75 0.83 1812
9 0.86 0.85 0.85 1804
accuracy 0.87 18000
macro avg 0.87 0.87 0.87 18000
weighted avg 0.87 0.87 0.87 18000
The confusion matrix is:
model_dev(a=700, add1=True, b=500, add2=True, c=500, act_func='relu', wt1='he_uniform')
Trained for 27 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 4s 8ms/step - loss: 0.3879 - accuracy: 0.8776
loss and accuracy are : [0.38789618015289307, 0.8776111006736755]
The classification report is:
precision recall f1-score support
0 0.81 0.95 0.88 1814
1 0.86 0.93 0.89 1828
2 0.95 0.88 0.91 1803
3 0.86 0.84 0.85 1719
4 0.91 0.91 0.91 1812
5 0.81 0.89 0.85 1768
6 0.89 0.85 0.87 1832
7 0.95 0.89 0.92 1808
8 0.92 0.78 0.85 1812
9 0.85 0.86 0.85 1804
accuracy 0.88 18000
macro avg 0.88 0.88 0.88 18000
weighted avg 0.88 0.88 0.88 18000
The confusion matrix is:
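The `wt1='he_uniform'` run above only changes the kernel initializer. He-uniform draws each weight from U(-limit, limit) with limit = sqrt(6 / fan_in), which keeps activation variance stable for ReLU layers; a numpy sketch (the layer shape is illustrative):

```python
import numpy as np

def he_uniform(fan_in, fan_out, seed=42):
    # He-uniform: sample from U(-limit, limit) with limit = sqrt(6 / fan_in)
    limit = np.sqrt(6.0 / fan_in)
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

w = he_uniform(700, 500)  # e.g. the 700 -> 500 dense layer used above
```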
earl_stp = EarlyStopping(monitor='val_accuracy', patience=5, restore_best_weights=True)
model_dev(a=700, add1=True, b=700, add2=True, c=500, act_func='relu', opt=tf.keras.optimizers.Adam(amsgrad=True, use_ema=True), earl_stp=earl_stp, batch_size=500, epochs=150)
Trained for 33 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 7s 12ms/step - loss: 0.2747 - accuracy: 0.9201
loss and accuracy are : [0.2746742069721222, 0.9200555682182312]
The classification report is:
precision recall f1-score support
0 0.90 0.96 0.93 1814
1 0.89 0.95 0.92 1828
2 0.93 0.95 0.94 1803
3 0.89 0.91 0.90 1719
4 0.94 0.94 0.94 1812
5 0.92 0.90 0.91 1768
6 0.93 0.90 0.92 1832
7 0.95 0.93 0.94 1808
8 0.91 0.88 0.89 1812
9 0.93 0.89 0.91 1804
accuracy 0.92 18000
macro avg 0.92 0.92 0.92 18000
weighted avg 0.92 0.92 0.92 18000
The confusion matrix is:
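The `EarlyStopping(monitor='val_accuracy', patience=5, restore_best_weights=True)` behavior driving the runs above can be sketched in plain Python; the validation-accuracy stream here is made up for illustration:

```python
def early_stop(val_history, patience=5):
    """Return (best_epoch, stop_epoch) for a higher-is-better monitor,
    mimicking Keras EarlyStopping with restore_best_weights=True."""
    best, best_epoch, wait = float('-inf'), 0, 0
    for epoch, val in enumerate(val_history):
        if val > best:                      # improvement: reset patience
            best, best_epoch, wait = val, epoch, 0
        else:
            wait += 1
            if wait >= patience:            # no improvement for `patience` epochs
                return best_epoch, epoch    # weights restored from best_epoch
    return best_epoch, len(val_history) - 1

accs = [0.70, 0.75, 0.78, 0.80, 0.79, 0.79, 0.78, 0.80, 0.77, 0.76]
print(early_stop(accs))  # → (3, 8): stop at epoch 8, restore epoch-3 weights
```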
model_dev(a=1000, add1=True, b=500, add2=True, c=250, add3=True, d=150, act_func='relu', opt=tf.keras.optimizers.Adam(amsgrad=True, use_ema=True), earl_stp=earl_stp, batch_size=500, epochs=150)
Trained for 32 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 7s 11ms/step - loss: 0.3122 - accuracy: 0.9082
loss and accuracy are : [0.31217652559280396, 0.9081666469573975]
The classification report is:
precision recall f1-score support
0 0.89 0.95 0.92 1814
1 0.89 0.95 0.92 1828
2 0.92 0.94 0.93 1803
3 0.87 0.90 0.88 1719
4 0.94 0.92 0.93 1812
5 0.90 0.87 0.89 1768
6 0.92 0.89 0.90 1832
7 0.94 0.92 0.93 1808
8 0.91 0.86 0.88 1812
9 0.91 0.87 0.89 1804
accuracy 0.91 18000
macro avg 0.91 0.91 0.91 18000
weighted avg 0.91 0.91 0.91 18000
The confusion matrix is:
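`Adam(amsgrad=True)` in the calls above keeps a running maximum of the second-moment estimate, so the per-weight effective step size can only shrink. A simplified single-parameter numpy sketch of the update rule (bias-correction details differ slightly from the Keras implementation):

```python
import numpy as np

def amsgrad_step(w, grad, state, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    # state = (m, v, v_hat): first moment, second moment, running max of v
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    v_hat = max(v_hat, v)          # AMSGrad: second moment never decreases
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)

# Minimize f(w) = w**2 (gradient 2w) starting from w = 1.0
w, state = 1.0, (0.0, 0.0, 0.0)
for _ in range(500):
    w, state = amsgrad_step(w, 2 * w, state)
```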
model_dev(a=1500, add1=True, b=750, add2=True, c=500, add3=True, d=250, add4=True, e=100, act_func='relu', opt=tf.keras.optimizers.Adam(amsgrad=True, use_ema=True), earl_stp=earl_stp, batch_size=500, epochs=150)
Trained for 30 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 8s 14ms/step - loss: 0.2726 - accuracy: 0.9204
loss and accuracy are : [0.2726195454597473, 0.9203888773918152]
The classification report is:
precision recall f1-score support
0 0.91 0.96 0.94 1814
1 0.90 0.96 0.93 1828
2 0.94 0.95 0.95 1803
3 0.89 0.93 0.91 1719
4 0.95 0.93 0.94 1812
5 0.92 0.89 0.90 1768
6 0.92 0.90 0.91 1832
7 0.95 0.92 0.93 1808
8 0.91 0.87 0.89 1812
9 0.93 0.89 0.91 1804
accuracy 0.92 18000
macro avg 0.92 0.92 0.92 18000
weighted avg 0.92 0.92 0.92 18000
The confusion matrix is:
model_dev(a=700, add1=True, b=700, add2=True, c=500, add3=True, d=500, act_func='relu', opt=tf.keras.optimizers.Adam(amsgrad=True, use_ema=True), earl_stp=earl_stp, batch_size=500, epochs=150)
Trained for 29 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 10s 16ms/step - loss: 0.2410 - accuracy: 0.9315
loss and accuracy are : [0.24102197587490082, 0.9315000176429749]
The classification report is:
precision recall f1-score support
0 0.93 0.96 0.95 1814
1 0.91 0.96 0.93 1828
2 0.95 0.96 0.96 1803
3 0.90 0.93 0.92 1719
4 0.95 0.94 0.95 1812
5 0.92 0.91 0.91 1768
6 0.94 0.91 0.93 1832
7 0.96 0.93 0.95 1808
8 0.92 0.90 0.91 1812
9 0.93 0.91 0.92 1804
accuracy 0.93 18000
macro avg 0.93 0.93 0.93 18000
weighted avg 0.93 0.93 0.93 18000
The confusion matrix is:
model_dev(a=700, add1=True, b=700, add2=True, c=500, add3=True, d=500, act_func='relu', opt=tf.keras.optimizers.Adam(amsgrad=True, use_ema=True), earl_stp=earl_stp, batch_size=1000, epochs=150, wt1='he_uniform')
Trained for 26 epochs, checkpointing to model_weights.h5 after each epoch.
563/563 [==============================] - 9s 16ms/step - loss: 0.3157 - accuracy: 0.9077
loss and accuracy are : [0.31573137640953064, 0.9076666831970215]
The classification report is:
precision recall f1-score support
0 0.89 0.95 0.92 1814
1 0.87 0.94 0.91 1828
2 0.92 0.94 0.93 1803
3 0.87 0.89 0.88 1719
4 0.94 0.92 0.93 1812
5 0.91 0.87 0.89 1768
6 0.93 0.87 0.90 1832
7 0.93 0.93 0.93 1808
8 0.91 0.87 0.89 1812
9 0.92 0.88 0.90 1804
accuracy 0.91 18000
macro avg 0.91 0.91 0.91 18000
weighted avg 0.91 0.91 0.91 18000
The confusion matrix is:
- We now reach an accuracy of about 93%, and this model is the most likely to generalize well.
- The most frequent misclassifications are:
- '3' classified as '5'
- '6' classified as '8'
- '1' classified as '7'
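The confused pairs listed above can be read off a confusion matrix mechanically: zero the diagonal and take the largest remaining counts. A sketch with a small hypothetical matrix (not the project's 10-class one):

```python
import numpy as np

def top_confusions(cm, k=3):
    """Return the k most frequent (true, predicted) misclassification pairs."""
    off = cm.copy().astype(int)
    np.fill_diagonal(off, 0)                     # ignore correct predictions
    flat = np.argsort(off, axis=None)[::-1][:k]  # largest off-diagonal counts
    return [tuple(int(x) for x in np.unravel_index(i, off.shape)) for i in flat]

# Hypothetical 4-class confusion matrix (rows = true, columns = predicted)
cm = np.array([[50,  2,  1,  0],
               [ 3, 45,  9,  1],
               [ 0,  4, 48,  6],
               [ 1,  0,  2, 52]])
print(top_confusions(cm, k=2))  # → [(1, 2), (2, 3)]
```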